Automation and Accountability in Decision Support System Interface Design

Author

  • Mary L. Cummings
Abstract

When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses the ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design, with an emphasis on military applications. Because of the inherent complexity of socio-technical systems, decision support systems are particularly vulnerable to certain potential ethical pitfalls that encompass automation and accountability issues. If computer systems diminish a user's sense of moral agency and responsibility, an erosion of accountability could result. In addition, these problems are exacerbated when an interface is perceived as a legitimate authority. I argue that when developing human-computer interfaces for decision support systems that have the ability to harm people, the possibility exists that a moral buffer, a form of psychological distancing, is created which allows people to ethically distance themselves from their actions.

Introduction

Understanding the impact of ethical and social dimensions in design is a topic that is receiving increasing attention both in academia and in practice. Designers of decision support systems (DSSs) embedded in computer interfaces have a number of additional ethical responsibilities beyond those of designers who only interact with the mechanical or physical world. When the human element is introduced into decision and control processes, entirely new layers of social and ethical issues (including moral responsibility) emerge but are not always recognized as such. Ethical and social impact issues can arise during all phases of design, and identifying and addressing them as early as possible helps the designer both analyze the domain more comprehensively and derive specific design guidance. This paper discusses the accountability issues specific to DSSs that result from introducing automation and highlights areas that interface designers should take into consideration.

If a DSS is faulty or fails to take into account a critical social impact factor, the results will not only be expensive in terms of later redesigns and lost productivity, but may also include the loss of life. Unfortunately, history is replete with examples of how failures to adequately understand decision support problems inherent in complex sociotechnical domains can lead to catastrophe. For example, in 1988 the USS Vincennes, a U.S. Navy warship, accidentally shot down an Iranian commercial passenger airliner, killing all aboard, because of a poorly designed weapons control computer interface. The accident investigation revealed that nothing was wrong with the system software or hardware; rather, the accident was caused by an inadequate and overly complex display of information to the controllers (van den Hoven, 1994). Specifically, one of the primary factors leading to the decision to shoot down the airliner was the controllers' perception that the airliner was descending toward the ship, when in fact it was climbing away from it. The display tracking the airliner was poorly designed and did not include the rate of target altitude change, which required controllers to "compare data taken at different times and make the calculation in their heads, on scratch pads, or on a calculator – and all this during combat" (Lerner, 1989).
This lack of understanding of the need for human-centered interface design was repeated by the military in the 2003 war with Iraq, when the U.S. Army's Patriot missile system engaged in fratricide, shooting down a British Tornado and an American F/A-18 and killing three pilots. The displays were confusing and often incorrect, and the operators, who were given only ten seconds to veto a computer solution, were admittedly lacking training in a highly complex management-by-exception system (32nd Army Air and Missile Defense Command, 2003).

In both the USS Vincennes and Patriot missile cases, interface designers could say that usability was the core problem, but the problem is much deeper and more complex. While the manifestation of poor design decisions led to severe usability issues in these cases, there are underlying issues concerning responsibility, accountability, and social impact that deserve further analysis. Beyond usability, there are many facets of decision support system design that have significant social and ethical implications, although often these can be subtle. The interaction between cognitive limitations, system capabilities, and ethical and social impact cannot be easily quantified using formulas and mathematical models. Often what may seem to be a straightforward design decision can carry ethical implications that go unnoticed. One such design consideration is the degree of automation used in a decision support system. While the introduction of automation may seem to be a purely technical issue, it has tremendous social and ethical implications that may not be fully understood in the design process. It is critical that interface designers realize that the inclusion of degrees of automation is not merely a technical choice, but one that also carries social and ethical implications.

Automation in decision support systems

In general, automation does not replace the need for humans; rather, it changes the nature of their work (Parasuraman & Riley, 1997). One of the primary design dilemmas engineers and designers face is determining what level of automation should be introduced into a system that requires human intervention. For rigid tasks that require no flexibility in decision-making and that have a low probability of system failure, full automation often provides the best solution (Endsley & Kaber, 1999). However, in systems that deal with decision-making in dynamic environments with many external and changing constraints, higher levels of automation are not advisable because of the risks and because an automated decision aid cannot be perfectly reliable (Sarter & Schroeder, 2001). Various levels of automation can be introduced in decision support systems, from fully automated, in which the operator is completely left out of the decision process, to minimal levels of automation, in which the automation only presents the relevant data. The application of automation in decision support systems is effective when decisions can be accurately and quickly reached based on a correct and comprehensive algorithm that considers all known constraints. However, the inability of automation models to account for all potential conditions or relevant factors results in brittle decision algorithms that can make erroneous or misleading suggestions (Guerlain et al., 1996; Smith, McCoy, & Layton, 1997). One way to make this spectrum of automation concrete is sketched in the code below.
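The following sketch is illustrative only and is not taken from the paper: it shows one way a DSS designer might encode discrete levels of automation and a management-by-exception veto window of the kind described for the Patriot system. All class, function, and parameter names here are hypothetical, and the level scale is a simplified stand-in for the finer-grained taxonomies cited above.

```python
# Illustrative sketch (not from the paper): discrete automation levels and a
# management-by-exception veto window. All names are hypothetical.
from dataclasses import dataclass
from enum import IntEnum
import time


class AutomationLevel(IntEnum):
    """Coarse levels of automation, from data display only to full autonomy."""
    INFORMATION_ONLY = 1          # automation only presents relevant data
    ADVISORY = 2                  # automation recommends, the human decides
    MANAGEMENT_BY_CONSENT = 3     # automation acts only after explicit approval
    MANAGEMENT_BY_EXCEPTION = 4   # automation acts unless vetoed within a window
    FULL_AUTOMATION = 5           # operator is out of the decision loop


@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0-1.0, produced by a possibly brittle algorithm


def execute_with_veto(rec: Recommendation, veto_window_s: float, operator_vetoes) -> bool:
    """Management-by-exception: execute `rec` unless the operator vetoes in time.

    `operator_vetoes` is a callable returning True once the operator has vetoed.
    A very short window (e.g., 10 s) leaves little time to check contraindications.
    """
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if operator_vetoes():
            return False                  # vetoed: the recommendation is dropped
        time.sleep(0.1)                   # poll roughly ten times per second
    print(f"Executing automated action: {rec.action}")
    return True                           # no veto in time: automation proceeds
```

Whether a level-4 scheme with a short veto window is appropriate is precisely the kind of decision the paper treats as ethical as well as technical, since it quietly shifts the operator into a monitoring role.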
The unpredictability of future situations and unanticipated responses from both systems and human operators, what Parasuraman et al. (2000) term the "noisiness" of the world, makes it impossible for any automation algorithm to always provide the correct response. In addition, as in the USS Vincennes and Patriot missile examples, automated solutions and recommendations can be confusing or misleading, causing operators to make suboptimal decisions, which, in the case of a weapons control interface, can be lethal.

In addition to problems with automation brittleness, significant research has shown that there are many drawbacks to higher levels of automation that relegate the operator to a primarily monitoring role. Parasuraman (2000) contends that over-automation causes skill degradation, reduced situation awareness, unbalanced workload, and over-reliance on automation. There have been many incidents in other domains, such as nuclear power plants and medical device applications, where confusing automation representations have led to lethal consequences. For example, in perhaps one of the most well-known engineering accidents in the United States, the 1979 cooling malfunction at one of the Three Mile Island nuclear reactors, problems with information representation in the control room and human cognitive limitations were primary contributors to the accident. Automation of system components and their subsequent representation on the instrument panels were overly complex and overwhelmed the controllers with information that was difficult to synthesize, misleading, and confusing (NRC, 2004).

The medical domain is replete with examples of problematic interfaces and ethical dilemmas. For example, in the Therac-25 cases that occurred between 1985 and 1987, it was discovered too late for several patients that the human-computer interface for the Therac-25, a machine designed for cancer radiation therapy, was poorly designed. It was possible for a technician to enter erroneous data, correct it on the display so that the data appeared accurate, and then unknowingly begin treatments with lethal levels of radiation. Other than an ambiguous "Malfunction 54" error code, there was no indication that the machine was delivering fatal doses of radiation (Leveson & Turner, 1995).

Many researchers assert that keeping the operator engaged in decisions supported by automation, otherwise known as the human-centered approach to the application of automation, will help prevent the confusion and erroneous decisions that could cause potentially fatal problems (Billings, 1997; Parasuraman, Masalonis, & Hancock, 2000; Parasuraman & Riley, 1997). Reducing automation levels can cause higher workloads for operators; however, the reduction can keep operators cognitively engaged and actively part of the decision-making process, which promotes critical function performance as well as situation awareness (Endsley, 1997). Higher workloads can be seen as a less-than-optimal and inefficient design approach, but efficiency should not necessarily be the primary consideration when designing a DSS. Keen and Scott-Morton (1978) assert that using a computer aid to improve the effectiveness of decision making is more important than improving its efficiency. Automation can indeed make a system highly efficient but ineffective, especially if the knowledge needed for a correct decision is not available in a predetermined algorithm.
Thus, higher, more "efficient" levels of automation are not always the best selection for an effective DSS. While it is well established that the use of automation in human-computer interfaces should be investigated fully from a design standpoint, there are also ethical considerations, especially for interfaces that impact human life, such as weapons and medical interfaces. What might seem to be the most effective level of automation from a design viewpoint may not be the most ethical. The impact of automation on the user's actions is a critical design consideration; however, another important point is how automation can affect a user's sense of responsibility and accountability. In one of the few references in the technical literature on humans and automation that considers the relationship between automation and moral responsibility, Sheridan (1996) is wary of individuals "blissfully trusting the technology and abandoning responsibility for one's own actions."

Overly trusting automation in complex system operation is a well-recognized decision support problem. Known as automation bias, it is the human tendency to disregard or not search for contradictory information in light of a computer-generated solution that is accepted as correct (Mosier & Skitka, 1996; Parasuraman & Riley, 1997). Automation bias is particularly problematic when intelligent decision support is needed in large problem spaces under time pressure, as in command and control domains such as emergency path planning and resource allocation (Cummings, 2004). Moreover, automated decision aids designed to reduce human error can actually cause new errors in the operation of a system. In an experiment in which subjects were required both to monitor low-fidelity gauges and to participate in a tracking task, 39 out of 40 subjects committed errors of commission; that is, these subjects almost always followed incorrect automated directives or recommendations, despite the fact that contraindications existed and verification was possible (Skitka et al., 1999). Automation bias is an important consideration from a design perspective, but, as the next section demonstrates, it also has ethical implications.

Automation and Accountability

While automation bias can be addressed through training intervention techniques (Ahlstrom et al., 2003; however, see Skitka et al., 1999, for conflicting evidence), the degradation of accountability and the abandonment of responsibility when using automated computer interfaces are much more difficult and ambiguous questions to address. Automated decision support tools are designed to improve decision effectiveness and reduce human error, but they can cause operators to relinquish a sense of responsibility, and subsequently accountability, because of a perception that the automation is in charge. Sheridan (1983) maintains that even in the information-processing role, "individuals using the system may feel that the machine is in complete control, disclaiming personal accountability for any error or performance degradation." Some research on social accountability suggests that increasing social accountability reduces the primacy effect, i.e., the tendency to best remember the salient cues that are seen first (Tetlock, 1983), an effect akin to automation bias. Social accountability is defined as people having to explain and justify their social judgments about others.
In theory, increased accountability motivates subjects to employ more self-critical and cognitively complex decision-making strategies (Tetlock & Boettger, 1989). However, previous studies on social accountability focused on human judgments about other humans and did not incorporate technology, specifically automation, so their application to the discussion of computers and accountability is somewhat limited. Skitka, Mosier, and Burdick (2000) attempted to bridge the gap between research on accountability from a purely social perspective and research that includes technology in the form of automation. The specific intent of this study was to determine the effects of social accountability on automation bias. Instead of being held accountable for their judgments about other people, subjects were required to justify strategies and outcomes in computerized flight simulation trials. The results showed that increased social accountability not only led to fewer instances of automation bias, through decreased errors of omission and commission, but also improved overall task performance (Skitka, Mosier, & Burdick, 2000).
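As a purely illustrative sketch, and not a method proposed in the paper, the snippet below shows one hypothetical way an interface could build this kind of accountability into the acceptance of an automated recommendation: the operator must acknowledge any contradicting cues and record a brief written justification before the recommendation can be accepted. All names and the 20-character threshold are invented for the example.

```python
# Illustrative sketch (not from the paper): a hypothetical justification gate
# that keeps the operator, rather than the automation, answerable for a decision.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Decision:
    recommendation: str
    contraindications: list[str]      # cues that conflict with the recommendation
    operator_justification: str = ""
    accepted: bool = False
    timestamp: str = ""


def accept_with_justification(decision: Decision, justification: str) -> Decision:
    """Require a substantive justification whenever contraindications exist.

    The stored rationale creates an audit trail tying the outcome to the operator.
    """
    if decision.contraindications and len(justification.strip()) < 20:
        raise ValueError(
            "Contradictory cues are present; a written justification is required "
            "before the automated recommendation can be accepted."
        )
    decision.operator_justification = justification.strip()
    decision.accepted = True
    decision.timestamp = datetime.now(timezone.utc).isoformat()
    return decision
```

Such a gate trades some efficiency for the cognitive engagement and sense of personal accountability that the studies above associate with fewer errors of omission and commission.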



Publication date: 2006